
    The BBFS Filesystem Model


    Performance comparison of ide and scsi disks

    It is widely believed that the IDE disks found in PCs are inexpensive but slow, whereas the SCSI disks used in servers and workstations are faster, more reliable, and more manageable. The belief that current IDE disks have performance and reliability disadvantages has been called into question by several recent reports. Thus we consider the possibility of achieving tremendous cost advantages by using IDE disks as the foundation of a storage system. In this paper, we give an extensive performance comparison of IDE and SCSI disks. We measure their performance on a variety of microbenchmarks and macrobenchmarks, and we explain these results with the help of kernel instrumentation and device activity traces collected by a SCSI analyzer. We consider the impact of several factors, including sequential vs. random workloads, file system enhancements such as journaling and Soft Updates, I/O scheduling in the kernel vs. in the disk drive (as enabled by tagged queuing), and the use of RAID technology to obtain I/O parallelism. In our testbed we find that the IDE disk is faster than the SCSI disk for sequential I/O, but the SCSI disk is faster for random I/O. We also observe that the random I/O performance deficit of the IDE disk is partly overcome by kernel I/O scheduling, and is further mitigated by scheduling in the drive (as enabled by tagged queuing), and by the use of journaling and Soft Updates. Taken as a whole, our results lead us to conclude that RAID systems based on IDE drives can be both faster and significantly less expensive than SCSI RAID systems.
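    The sequential-vs-random distinction the abstract highlights is easy to reproduce at user level. The sketch below is a minimal read microbenchmark in that spirit, not the paper's actual harness: it times reading every block of a file in offset order versus in shuffled order. All names (`make_test_file`, `read_bench`) are illustrative; note that on a modern OS the page cache will mask much of the seek cost, so a serious benchmark would use direct I/O or a raw device and a file larger than RAM.

    ```python
    import os
    import random
    import tempfile
    import time

    def make_test_file(path, size_mb=4, block=4096):
        """Write a file of random blocks to benchmark against."""
        with open(path, "wb") as f:
            for _ in range((size_mb * 1024 * 1024) // block):
                f.write(os.urandom(block))

    def read_bench(path, block=4096, sequential=True, seed=0):
        """Read every block of `path` in order (sequential) or in a
        shuffled order (random); return throughput in MB/s."""
        size = os.path.getsize(path)
        offsets = list(range(0, size, block))
        if not sequential:
            random.Random(seed).shuffle(offsets)
        start = time.perf_counter()
        with open(path, "rb") as f:
            for off in offsets:
                f.seek(off)
                f.read(block)
        elapsed = time.perf_counter() - start
        return (size / (1024 * 1024)) / elapsed

    if __name__ == "__main__":
        path = os.path.join(tempfile.gettempdir(), "bench.dat")
        make_test_file(path)
        seq = read_bench(path, sequential=True)
        rnd = read_bench(path, sequential=False)
        print(f"sequential: {seq:.1f} MB/s, random: {rnd:.1f} MB/s")
        os.remove(path)
    ```

    The same shuffled-offset pattern is what kernel or in-drive scheduling (tagged queuing) can reorder, which is why the abstract finds those features narrow the IDE disk's random-I/O deficit.
    
    
    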

    Obtaining high performance for storage outsourcing


    Modeling and optimizing I/O throughput of multiple disks on a bus

    In modern I/O architectures, multiple disk drives are attached to each I/O controller. A study of the performance of such architectures under I/O-intensive workloads has revealed a performance impairment that results from a previously unknown form of convoy behavior in disk I/O. In this paper, we describe measurements of the read performance of multiple disks that share a SCSI bus under a heavy workload, and develop and validate formulas that accurately characterize the observed performance (to within 12% on several platforms for I/O sizes in the range 16–128 KB). Two terms in the formula clearly characterize the lost performance seen in our experiments. We describe techniques to deal with the performance impairment, via user-level workarounds that achieve greater overlap of bus transfers with disk seeks, and that increase the percentage of transfers that occur at the full bus bandwidth rather than at the lower bandwidth of a disk head. Experiments show bandwidth improvements of 10–20% when using these user-level techniques, but only in the case of large I/Os.
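    The overlap the abstract describes can be illustrated with a toy simulation (this is an assumption-laden sketch, not the paper's model): each disk's seek is a per-disk delay that can proceed in parallel with other disks' activity, while bus transfers are serialized on a shared lock. With several disks outstanding, one disk's seek hides under another's bus transfer, so total time grows far more slowly than the per-disk sum. The timing constants and function names here are invented for illustration.

    ```python
    import threading
    import time

    BUS = threading.Lock()  # the shared SCSI bus: one transfer at a time

    def disk_worker(n_requests, seek_s, xfer_s):
        """Simulate one disk servicing requests: each request needs a
        disk-local seek (overlappable with other disks) followed by a
        bus transfer (exclusive on the shared bus)."""
        for _ in range(n_requests):
            time.sleep(seek_s)   # seek + rotation: per-disk delay
            with BUS:            # bus transfer: serialized across disks
                time.sleep(xfer_s)

    def run(n_disks, n_requests=20, seek_s=0.004, xfer_s=0.001):
        """Run n_disks workers concurrently; return elapsed wall time."""
        threads = [
            threading.Thread(target=disk_worker,
                             args=(n_requests, seek_s, xfer_s))
            for _ in range(n_disks)
        ]
        start = time.perf_counter()
        for t in threads:
            t.start()
        for t in threads:
            t.join()
        return time.perf_counter() - start

    if __name__ == "__main__":
        one = run(1)
        two = run(2)
        print(f"1 disk: {one:.3f}s, 2 disks: {two:.3f}s "
              f"(2x the work in roughly the same time)")
    ```

    In the simulation, two disks complete twice the requests in nearly the time one disk needs, because seeks overlap with the other disk's transfers; the convoy impairment in the paper is precisely a failure to achieve this overlap, and its user-level workarounds restore it.
    
    
    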